Exponential convergence of testing error for stochastic gradient methods

Authors

  • Loucas Pillaud-Vivien
  • Alessandro Rudi
  • Francis Bach
Abstract

We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing loss (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing error (classification error) converges exponentially fast if low-noise conditions are assumed.
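
As an informal illustration of this gap (a minimal simulation sketch; the random-feature approximation of the kernel, the step-size schedule, and the noiseless toy distribution below are my assumptions, not the paper's construction), single-pass SGD on the square loss drives the 0-1 error to zero long before the squared loss has converged:

    import numpy as np

    rng = np.random.default_rng(0)

    # Noiseless toy problem: y = sign(x), so a hard-margin / low-noise
    # condition holds trivially.
    def sample(n):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, np.sign(x)

    # Random Fourier features as a stand-in for a Gaussian
    # (positive definite) kernel.
    D = 200
    w = rng.normal(scale=3.0, size=D)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)

    def features(x):
        return np.sqrt(2.0 / D) * np.cos(np.outer(x, w) + b)

    theta = np.zeros(D)
    x_test, y_test = sample(5000)
    phi_test = features(x_test)

    for t in range(1, 20001):
        x, y = sample(1)
        phi = features(x)[0]
        # single-pass SGD on the square loss with a decaying step size
        theta -= (0.5 / np.sqrt(t)) * (phi @ theta - y[0]) * phi
        if t % 5000 == 0:
            pred = phi_test @ theta
            print(t,
                  np.mean((pred - y_test) ** 2),      # excess squared loss: decays slowly
                  np.mean(np.sign(pred) != y_test))   # 0-1 error: expected to vanish fast

On a margin like this one, the printed 0-1 error typically reaches zero while the squared loss is still visibly decaying, which is the shape of the result the abstract describes.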

Similar articles

A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments ...
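
A rough sketch of the gradient-memory idea (the toy regularized least-squares problem, step size, and iteration count below are my choices; the actual method and its constants are in the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative strongly convex problem:
    # f(w) = (1/n) sum_i (x_i^T w - y_i)^2 / 2 + (lam/2) ||w||^2
    n, d, lam = 100, 5, 0.1
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

    def grad_i(w, i):
        # gradient of the i-th component function
        return (X[i] @ w - y[i]) * X[i] + lam * w

    w = np.zeros(d)
    table = np.zeros((n, d))   # memory of the last gradient seen per example
    avg = np.zeros(d)          # running average of the table
    for t in range(30 * n):
        i = rng.integers(n)
        g = grad_i(w, i)
        avg += (g - table[i]) / n   # swap example i's stale gradient for the fresh one
        table[i] = g
        w -= 0.01 * avg

    w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
    print("distance to optimum:", np.linalg.norm(w - w_star))

Each iteration touches only one example's gradient, yet steps along an estimate of the full gradient, which is what allows the linear (exponential) rate on finite sums.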

An Analytical Model for Predicting the Convergence Behavior of the Least Mean Mixed-Norm (LMMN) Algorithm

The Least Mean Mixed-Norm (LMMN) algorithm is a stochastic gradient-based algorithm whose objective is to minimize a combination of the cost functions of the Least Mean Square (LMS) and Least Mean Fourth (LMF) algorithms. This algorithm inherits many of the properties and advantages of the LMS and LMF algorithms and mitigates some of their weaknesses. The main issue of the LMMN algorithm is t...
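
For concreteness, a hedged sketch of a mixed-norm update of this kind, assuming the common parameterization in which a mixing parameter delta in [0, 1] interpolates between LMS (delta = 1) and LMF (delta = 0); the toy system-identification setup is mine:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy system identification: unknown FIR filter h, noisy observations.
    h = np.array([0.5, -0.3, 0.2, 0.1])
    M = len(h)
    w = np.zeros(M)        # adaptive filter weights
    mu, delta = 0.01, 0.5  # step size and LMS/LMF mixing parameter

    x_buf = np.zeros(M)    # tapped delay line of recent inputs
    for n in range(5000):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = rng.normal()
        d = h @ x_buf + 0.01 * rng.normal()   # desired signal
        e = d - w @ x_buf                     # a priori error
        # mixed-norm update: LMS term (e) blended with LMF term (e^3)
        w += mu * (delta * e + (1 - delta) * e ** 3) * x_buf

    print("weight error norm:", np.linalg.norm(w - h))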

Contraction analysis of nonlinear random dynamical systems

In order to bring contraction analysis into the very fruitful and topical fields of stochastic and Bayesian systems, we extend the theory described in [Lohmiller and Slotine, 1998] to random differential equations. We propose new definitions of contraction (almost sure contraction and contraction in mean square) that make it possible to control the evolution of a stochastic system in two ways. The...
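
Paraphrasing the two notions in my own notation (the paper's precise definitions may differ), for a virtual displacement \delta x between neighboring trajectories and some rate \lambda > 0:

    % paraphrase, not the paper's exact statements
    \text{almost sure contraction:}\quad
      \|\delta x(t)\| \le e^{-\lambda t}\,\|\delta x(0)\| \quad \text{a.s.}
    \text{contraction in mean square:}\quad
      \mathbb{E}\big[\|\delta x(t)\|^{2}\big]
      \le e^{-2\lambda t}\,\mathbb{E}\big[\|\delta x(0)\|^{2}\big]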

Convergence of Numerical Method For the Solution of Nonlinear Delay Volterra Integral Equations

In this paper, the solvability of nonlinear Volterra integral equations with general vanishing delays is established. Sinc methods for approximating the solutions of Volterra integral equations have so far received considerable attention, mainly due to their high accuracy. These approximations converge rapidly to the exact solutions as the number of sinc points increases. Here the numerical solution of nonlinear...
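
For reference, such equations have the generic form (standard textbook form in my notation; the paper's precise setting may differ):

    u(t) = g(t) + \int_{0}^{\theta(t)} K\big(t, s, u(s)\big)\, ds,
    \qquad 0 \le t \le T,

where the delay function satisfies 0 <= theta(t) <= t with theta(0) = 0 (a vanishing delay); sinc quadrature rules of the kind mentioned typically achieve errors of order exp(-c sqrt(N)) in the number N of sinc points.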

A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. Numerical experiments indicate that the new algorithm...

Journal title:
  • CoRR

Volume abs/1712.04755  Issue

Pages  -

Publication date 2017